

LLMs are Not Just Next Token Predictors

Downes, Stephen M., Forber, Patrick, Grzankowski, Alex

arXiv.org Artificial Intelligence

LLMs are statistical models of language learning through stochastic gradient descent with a next token prediction objective, prompting a popular view among AI modelers: LLMs are just next token predictors. While LLMs are engineered using next token prediction, and trained based on their success at this task, our view is that a reduction to 'just next token predictor' sells LLMs short. Moreover, there are important explanations of LLM behavior and capabilities that are lost when we engage in this kind of reduction. To draw this out, we make an analogy with a once prominent research program in biology explaining evolution and development from the gene's eye view. The 'just next token predictor' view is explicitly laid out by Shanahan (2024): "A great many tasks that demand intelligence in humans can be reduced to next-token prediction with a sufficiently performant model" (2024, 68), and "surely what they are doing is more than 'just' next-token prediction? Well, it is an engineering fact that this is what an LLM does. The noteworthy thing is that next-token prediction is sufficient for solving previously unseen reasoning problems" (2024, 77).


Query-Response Interactions by Multi-tasks in Semantic Search for Chatbot Candidate Retrieval

Shi, Libin, Zhang, Kai, Rong, Wenge

arXiv.org Artificial Intelligence

Semantic search for candidate retrieval is an important yet neglected problem in retrieval-based chatbots, which aims to select a set of candidate responses efficiently from a large pool. The bottleneck is designing a model architecture with two properties: 1) rich interactions between a query and a response, to produce query-relevant responses; 2) the ability to project the query and the response separately into latent spaces, so that semantic search can be applied efficiently during online inference. To tackle this problem, we propose a novel approach, called Multitask-based Semantic Search Neural Network (MSSNN), for candidate retrieval, which accomplishes query-response interactions through multi-tasks. The method employs a Seq2Seq modeling task to learn a good query encoder, then performs a word prediction task to build response embeddings, and finally uses a simple matching model to form the dot-product scorer. Experimental studies have demonstrated the potential of the proposed approach.
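The retrieval setup described above can be sketched in miniature. This is not the authors' MSSNN code: the `embed` function below is a toy hashed bag-of-words encoder standing in for the learned query encoder and response embeddings, but it shows the key design choice, namely that queries and responses are encoded independently and scored with a dot product, so response vectors can be precomputed offline and only the query is encoded at inference time.

```python
# Illustrative sketch (assumed names, not the paper's implementation):
# separate encoders + dot-product scorer for fast candidate retrieval.

def embed(text, dim=8):
    """Toy encoder: deterministic hashed bag-of-words vector, a stand-in
    for the learned query encoder / response embeddings in MSSNN."""
    vec = [0.0] * dim
    for word in text.lower().split():
        vec[sum(ord(c) for c in word) % dim] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

def dot_score(query_vec, response_vec):
    """Dot-product scorer: cheap online, since response vectors
    can be indexed ahead of time."""
    return sum(q * r for q, r in zip(query_vec, response_vec))

def retrieve(query, candidates):
    """Rank candidate responses against the query by dot product."""
    q = embed(query)
    cached = [(resp, embed(resp)) for resp in candidates]  # offline in practice
    return sorted(cached, key=lambda pair: -dot_score(q, pair[1]))

responses = ["see you tomorrow", "the weather is nice", "goodbye for now"]
ranked = [resp for resp, _ in retrieve("nice weather today", responses)]
```

Because the two towers never interact until the final dot product, the candidate pool can be stored in an approximate-nearest-neighbor index; the multi-task training in the paper is what makes such independently produced embeddings query-relevant.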


Word Predictor from Handwritten Text – Towards Data Science

#artificialintelligence

It's been a while since I contributed to the community. I am back to give what was due. But before that, let me tell you what I was up to all this time. Professionally, the highlights of these months have been two things. One, I spoke at a data science conference in March (the Mumbai edition of WiDS).


Building a Next Word Predictor in Tensorflow – Towards Data Science

@machinelearnbot

Next Word Prediction, also called Language Modeling, is the task of predicting which word comes next. It is one of the fundamental tasks of NLP and has many applications. You might be using it daily when you write texts or emails without realizing it. I recently built a next word predictor in TensorFlow, and in this blog I want to go through the steps I followed so you can replicate them and build your own word predictor. I used the text8 dataset, which is an English Wikipedia dump from March 2006.
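The task itself can be illustrated without a neural network. The sketch below is a counting bigram model, not the blog's TensorFlow network, but it captures the same objective: given the words seen so far, predict the most likely next word from frequencies in a training corpus.

```python
# Minimal next-word-prediction sketch: a bigram frequency model
# (a simplified stand-in for the neural language model in the post).
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent successor of `word`, or None if unseen."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
predict_next(model, "the")  # "cat" follows "the" twice, "mat" once
```

A neural language model like the one in the post replaces these raw counts with learned embeddings and a softmax over the vocabulary, which lets it generalize to word contexts never seen verbatim in training, something a pure counting model cannot do.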